algorithmic harm
Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems
Yongsu Ahn, Quinn K. Wolter, Jonilyn Dick, Janet Dick, Yu-Ru Lin
Recommender systems have become integral to digital experiences, shaping user interactions and preferences across various platforms. Despite their widespread use, these systems often suffer from algorithmic biases that can lead to unfair and unsatisfactory user experiences. This study introduces an interactive tool designed to help users comprehend and explore the impacts of algorithmic harms in recommender systems. By leveraging visualizations, counterfactual explanations, and interactive modules, the tool allows users to investigate how biases such as miscalibration, stereotypes, and filter bubbles affect their recommendations. Informed by in-depth user interviews, this tool benefits both general users and researchers by increasing transparency and offering personalized impact assessments, ultimately fostering a better understanding of algorithmic biases and contributing to more equitable recommendation outcomes. This work provides valuable insights for future research and practical applications in mitigating bias and enhancing fairness in machine learning algorithms.
- North America > Mexico (0.04)
- Europe > Portugal (0.04)
- Asia > Taiwan (0.04)
- Asia > South Korea (0.04)
- Research Report (1.00)
- Questionnaire & Opinion Survey (1.00)
- Personal > Interview (0.46)
Blaming Humans and Machines: What Shapes People's Reactions to Algorithmic Harm
Gabriel Lima, Nina Grgić-Hlača, Meeyoung Cha
Artificial intelligence (AI) systems can cause harm to people. This research examines how individuals react to such harm through the lens of blame. Building upon research suggesting that people blame AI systems, we investigated how several factors influence people's reactive attitudes towards machines, designers, and users. The results of three studies (N = 1,153) indicate differences in how blame is attributed to these actors. Whether AI systems were explainable did not impact blame directed at them, their developers, and their users. Considerations about fairness and harmfulness increased blame towards designers and users but had little to no effect on judgments of AI systems. Instead, what determined people's reactive attitudes towards machines was whether people thought blaming them would be a suitable response to algorithmic harm. We discuss implications, such as how future decisions about including AI systems in the social and moral spheres will shape laypeople's reactions to AI-caused harm.
- Europe > Germany > Hamburg (0.05)
- North America > United States > Arizona (0.04)
- North America > United States > New York (0.04)
- (3 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Law (1.00)
- Information Technology (1.00)
- Health & Medicine > Therapeutic Area > Cardiology/Vascular Diseases (0.68)
- Transportation > Ground > Road (0.67)
AI bias is rampant. Bug bounties could help catch it.
The 1990s might have a lot to teach us about how we should tackle harm from artificial intelligence in the 2020s. Back then, some companies found they could actually make themselves safer by incentivizing the work of independent "white hat" security researchers who would hunt for issues and disclose them in a process that looked a lot like hacking with guardrails. That's how the practice of bug bounties became a cornerstone of cybersecurity today. In a research paper unveiled Thursday, researchers Josh Kenway, Camille François, Sasha Costanza-Chock, Inioluwa Deborah Raji and Joy Buolamwini argue that companies should once again invite their most ardent critics in -- this time, by putting bounties on harms that might originate in their artificial intelligence systems. François, a Fulbright scholar who has advised the French CTO and who played a key role in the U.S. Senate's probe of Russia's attempts to influence the 2016 election, published the report through the Algorithmic Justice League, which was founded in 2016 and "combines art and research to illuminate the social implications and harms of artificial intelligence."
- North America > United States (1.00)
- Europe > Russia (0.25)
- Asia > Russia (0.25)
The new weapon in the fight against biased algorithms: Bug bounties
When it comes to detecting bias in algorithms, researchers are trying to learn from the information security field – and particularly, from the bug bounty-hunting hackers who comb through software code to identify potential security vulnerabilities. The parallels between the work of these security researchers and the hunt for possible flaws in AI models are, in fact, at the heart of the work carried out by Deborah Raji, a research fellow in algorithmic harms for the Mozilla Foundation. Presenting the research she has been carrying out with advocacy group the Algorithmic Justice League (AJL) during the annual Mozilla Festival, Raji explained how, along with her team, she has been studying bug bounty programs to see how they could be applied to the detection of a different type of nuisance: algorithmic bias. Bug bounties, which reward hackers for discovering vulnerabilities in software code before malicious actors exploit them, have become an integral part of the information security field. Major companies such as Google, Facebook, and Microsoft all now run bug bounty programs; the number of these hackers is multiplying, and so are the financial rewards that corporations are ready to pay to fix software problems before malicious hackers find them.
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Robots (0.55)
- Information Technology > Communications > Social Media (0.46)
How can we stop algorithms telling lies?
Lots of algorithms go bad unintentionally. Some of them, however, are made to be criminal. Algorithms are formal rules, usually written in computer code, that make predictions about future events based on historical patterns. To train an algorithm you need to provide historical data as well as a definition of success. We've seen finance get taken over by algorithms in the past few decades. Trading algorithms use historical data to predict movements in the market. Success for such an algorithm is a predictable market move, and the algorithm is vigilant for patterns that have historically occurred just before that move.
- Asia > China (0.05)
- Oceania > Australia (0.04)
- North America > United States > West Virginia (0.04)
- (4 more...)
- Banking & Finance (1.00)
- Law (0.96)
- Information Technology > Services (0.47)
- (4 more...)